Similar Resources
Addendum to “Efficiently Enabling Conventional Block Sizes for Very Large Die-stacked DRAM Caches”
Abstract: The MICRO 2011 paper “Efficiently Enabling Conventional Block Sizes for Very Large Die-stacked DRAM Caches” proposed a novel die-stacked DRAM cache organization that embeds tags and data within the same physical DRAM row, using compound access scheduling to manage hit latency and a MissMap structure to make misses more efficient. This addendum provides a revised performan...
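As a rough illustration of the MissMap idea mentioned in this abstract, the sketch below keeps a per-segment presence bit vector of which blocks currently reside in the DRAM cache, so that a request known to miss can skip the in-DRAM tag probe entirely. This is a functional sketch of the concept only, not the paper's hardware implementation; the 4 KB segment and 64 B block sizes are illustrative assumptions.

```python
# Minimal functional sketch of a MissMap-like structure (assumed parameters,
# not taken from the paper): one bit per cache block within each segment.
SEGMENT_SIZE = 4096   # assumed segment granularity (bytes)
BLOCK_SIZE = 64       # conventional cache block size (bytes)
BLOCKS_PER_SEGMENT = SEGMENT_SIZE // BLOCK_SIZE  # 64 blocks -> 64-bit vector

class MissMap:
    """Tracks which blocks of each segment are present in the DRAM cache,
    so a definite miss can bypass the in-DRAM tag lookup."""
    def __init__(self):
        self.entries = {}  # segment base address -> presence bit vector

    def _locate(self, addr: int):
        seg = (addr // SEGMENT_SIZE) * SEGMENT_SIZE
        block = (addr % SEGMENT_SIZE) // BLOCK_SIZE
        return seg, block

    def is_present(self, addr: int) -> bool:
        seg, block = self._locate(addr)
        return (self.entries.get(seg, 0) >> block) & 1 == 1

    def insert(self, addr: int) -> None:
        seg, block = self._locate(addr)
        self.entries[seg] = self.entries.get(seg, 0) | (1 << block)

    def evict(self, addr: int) -> None:
        seg, block = self._locate(addr)
        if seg in self.entries:
            self.entries[seg] &= ~(1 << block)
```

In such a scheme, a last-level SRAM cache miss first consults the MissMap: if `is_present` returns False, the request is sent directly to off-chip memory without probing the stacked DRAM, avoiding the tag-access latency on guaranteed misses.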
Efficient RAS support for 3D Die-Stacked DRAM
Die-stacked DRAM is one of the most promising memory architectures for satisfying the high-bandwidth and low-latency needs of many computing systems. However, with technology scaling, all memory devices are expected to experience a significant increase in single- and multi-bit errors. 3D die-stacked DRAM will have the added burden of protecting against single through-silicon via (TSV) failures, which translate...
Exploring the Design Space of DRAM Caches
Die-stacked DRAM caches represent an emerging technology that offers a new level of cache between SRAM caches and main memory. As compared to SRAM, DRAM caches offer high capacity and bandwidth but incur high access latency costs. Therefore, DRAM caches face new design considerations that include the placement and granularity of tag storage in either DRAM or SRAM. The associativity of the cache...
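To see why the placement of tag storage is a first-order concern for these designs, a back-of-the-envelope estimate (with assumed parameters, not figures from the paper) shows how large an SRAM tag array would need to be for a conventionally block-sized DRAM cache:

```python
# Illustrative tag-storage estimate for a die-stacked DRAM cache
# (all numbers are assumptions for the sake of the calculation).
cache_capacity = 256 * 2**20      # 256 MB DRAM cache (assumed)
block_size     = 64               # conventional 64 B block size
tag_entry_size = 6                # ~6 B per tag entry (tag + state bits, assumed)

num_blocks = cache_capacity // block_size
sram_tag_bytes = num_blocks * tag_entry_size
print(f"{num_blocks} blocks -> {sram_tag_bytes / 2**20:.0f} MB of SRAM for tags")
# ~24 MB of on-die SRAM just for tags, which is why storing tags in the
# stacked DRAM itself (at the cost of extra access latency) is attractive.
```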
An Operating System Resilient to DRAM Failures
Concern is growing in the high-performance computing (HPC) community over the reliability of future extreme-scale systems. With systems continuing to grow dramatically in node count, and individual nodes also increasing in component count and complexity, large-scale systems are becoming less reliable. In fact, experts are predicting that failure rates may go from the current state of a handful a d...
C3D: Mitigating the NUMA bottleneck via coherent DRAM caches
Massive datasets prevalent in scale-out, enterprise, and high-performance computing are driving a trend toward ever-larger memory capacities per node. To satisfy the memory demands and maximize performance per unit cost, today’s commodity HPC and server nodes tend to feature multi-socket shared memory NUMA organizations. An important problem in these designs is the high latency of accessing mem...
Journal
Journal title: ACM SIGARCH Computer Architecture News
Year: 2013
ISSN: 0163-5964
DOI: 10.1145/2508148.2485958